    Camera motion estimation through planar deformation determination

    In this paper, we propose a global method for estimating the motion of a camera filming a static scene. Our approach is direct, fast and robust, and operates on adjacent frames of a sequence. It is based on a quadratic approximation of the deformation between two images in the case of a scene with constant depth in the camera coordinate system. This condition is very restrictive, but we show that, provided the translation and the variations of the inverse depth are small enough, the error in the optical flow induced by approximating the depth by a constant is small. In this context, we propose a new model of camera motion that allows us to separate the image deformation into a similarity and a "purely" projective map due to the change of direction of the optical axis. This model leads to a quadratic approximation of the image deformation, which we estimate with an M-estimator; the camera motion parameters can then be deduced immediately. Comment: 21 pages, modified version accepted on 20 March 200
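    The estimation step described here can be illustrated with a generic sketch: under a constant-depth assumption, image motion is often written with the classic eight-parameter quadratic flow model, whose coefficients can be fitted robustly to point correspondences with an M-estimator via iteratively reweighted least squares. The parameterization and Huber weights below are my own illustrative choices, not the paper's exact similarity + "purely" projective decomposition.

        # Sketch: robust fit of an 8-parameter quadratic flow model
        #   u = a1 + a2*x + a3*y + a7*x^2 + a8*x*y
        #   v = a4 + a5*x + a6*y + a7*x*y + a8*y^2
        # to matched points, using Huber-weighted IRLS (a simple M-estimator).
        import numpy as np

        def design_matrix(x, y):
            """Stacks the u-rows and v-rows so that [u; v] = A @ [a1..a8]."""
            z, one = np.zeros_like(x), np.ones_like(x)
            Au = np.stack([one, x, y, z, z, z, x * x, x * y], axis=1)
            Av = np.stack([z, z, z, one, x, y, x * y, y * y], axis=1)
            return np.vstack([Au, Av])

        def fit_quadratic_flow(pts0, pts1, delta=1.0, n_iter=20):
            """pts0, pts1: (N, 2) matched image points in adjacent frames."""
            x, y = pts0[:, 0], pts0[:, 1]
            b = np.concatenate([pts1[:, 0] - x, pts1[:, 1] - y])  # observed flow
            A = design_matrix(x, y)
            w = np.ones(len(b))
            for _ in range(n_iter):
                # weighted normal equations: (A^T W A) theta = A^T W b
                theta = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * b))
                r = b - A @ theta
                w = np.where(np.abs(r) <= delta, 1.0,
                             delta / np.maximum(np.abs(r), 1e-12))  # Huber weights
            return theta  # motion parameters would be read off these coefficients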

    Data fusion by segmentation. Application to texture discrimination

    We present a multi-channel segmentation algorithm derived from the Mumford-Shah functional. The method is a region-growing algorithm on a "vector-valued" image. The applications presented are texture discrimination and the detection of objects against a natural background.
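    As a rough illustration of such a region-growing scheme on a vector-valued image (a sketch under assumptions of my own, not the authors' implementation), the piecewise-constant Mumford-Shah model yields a simple merging test: joining two adjacent regions pays off when the saving in boundary length outweighs the increase of the data term.

        # Sketch: greedy merging of adjacent regions of a multi-channel image
        # under the piecewise-constant Mumford-Shah energy
        #   E = sum_R sum_{p in R} ||I(p) - mean_R||^2 + lam * total boundary length.
        # Merging regions (n1, mean1) and (n2, mean2) raises the data term by
        # n1*n2/(n1+n2) * ||mean1 - mean2||^2 and removes lam * boundary_len.
        import numpy as np

        def merge_gain(n1, mean1, n2, mean2, boundary_len, lam):
            data_increase = (n1 * n2) / (n1 + n2) * np.sum((mean1 - mean2) ** 2)
            return lam * boundary_len - data_increase  # > 0 means merging lowers E

        def greedy_merge(regions, adjacency, lam):
            """regions: {id: (n_pixels, mean_vector)};
            adjacency: {(i, j): shared boundary length} with i < j."""
            while True:
                best, best_gain = None, 0.0
                for (i, j), blen in adjacency.items():
                    g = merge_gain(*regions[i], *regions[j], blen, lam)
                    if g > best_gain:
                        best, best_gain = (i, j), g
                if best is None:
                    return regions
                i, j = best
                n1, m1 = regions[i]
                n2, m2 = regions[j]
                regions[i] = (n1 + n2, (n1 * m1 + n2 * m2) / (n1 + n2))
                del regions[j]
                new_adj = {}
                for (a, b), blen in adjacency.items():   # reconnect j's neighbours to i
                    a, b = (i if a == j else a), (i if b == j else b)
                    if a != b:
                        key = (min(a, b), max(a, b))
                        new_adj[key] = new_adj.get(key, 0.0) + blen
                adjacency = new_adj

    Stacking texture features as extra channels of the mean vectors is what turns this single scheme into the data-fusion and texture-discrimination application described above.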

    Segmentation by minimizing functionals and the merging methods

    Data fusion by segmentation. Application to texture discrimination.

    Introduction. Image analysts today are often presented with multiple data for the same scene, obtained with various sensors. Typical examples include satellite pictures using multispectral data or medical data such as MR images. It therefore seems natural to include this information in image processing algorithms: this should not only improve the analysis of the data but also help to obtain more stable algorithms, since more information is available. Another way to obtain multichannel input is by preprocessing a textured picture; indeed, the common way to characterize textures is to extract information about orientation, terminators, corners and so on. We present a multidimensional version of the segmentation algorithm introduced in [KMS] for the gray-level case. This note completes the description of the possibilities of the simple segmentation model given by the Mumford and Shah functional (see [MS]) and the corresponding region-growing algorithm. The outl
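    For reference, the underlying model is usually written as follows (a standard textbook form of the Mumford and Shah functional, with weights and notation of my own that may differ from [MS] and [KMS]): for a multichannel image g = (g_1, ..., g_C) on a domain \Omega, one seeks an approximation u and a discontinuity set K minimizing

        E(u, K) = \sum_{c=1}^{C} \int_{\Omega \setminus K} \lVert \nabla u_c \rVert^2 \, dx
                  + \mu \sum_{c=1}^{C} \int_{\Omega} (u_c - g_c)^2 \, dx
                  + \lambda \, \mathcal{H}^1(K).

    In the piecewise-constant case handled by the region-growing algorithm, u_c is the mean of g_c on each region, so the energy reduces to the sum over regions and channels of the squared deviations from the region mean, plus \lambda times the total boundary length.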

    Combinatorial pyramids and discrete geometry for energy-minimizing segmentation

    This paper defines the basis of a new hierarchical framework for segmentation algorithms based on energy-minimization schemes. The framework rests on two formal tools. First, a combinatorial pyramid efficiently encodes a hierarchy of partitions. Second, discrete geometric estimators precisely measure important geometric parameters of the regions. These measures, combined with photometric and topological features of the partition, allow us to design energy terms based on discrete measures. Our segmentation framework exploits these energies to build a pyramid of image partitions with a minimization scheme. Experiments illustrating the framework are shown and discussed.
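    The hierarchical side of such a framework can be pictured with a small sketch (a simplification of my own that uses a plain union-find, not the combinatorial-pyramid structure of darts and involutions used in the paper): each energy-reducing merge is recorded with the level at which it occurs, so any intermediate partition of the pyramid can be recovered later.

        # Sketch: a pyramid of partitions stored as a sequence of recorded merges.
        # partition_at(level) replays the merges up to that level; illustrative only.
        class MergeHierarchy:
            def __init__(self, n_regions):
                self.n = n_regions
                self.parent = list(range(n_regions))
                self.history = []                      # (level, absorbed, kept)

            @staticmethod
            def _find(i, parent):
                while parent[i] != i:
                    i = parent[i]
                return i

            def merge(self, i, j, level):
                ri = self._find(i, self.parent)
                rj = self._find(j, self.parent)
                if ri != rj:
                    self.parent[rj] = ri
                    self.history.append((level, rj, ri))

            def partition_at(self, level):
                """Label of every base region in the partition at the given level."""
                parent = list(range(self.n))
                for lvl, absorbed, kept in self.history:
                    if lvl <= level:
                        parent[self._find(absorbed, parent)] = self._find(kept, parent)
                return [self._find(i, parent) for i in range(self.n)]

    In an energy-minimizing setting, merge() would be driven by a criterion such as the Mumford-Shah merge gain sketched earlier, with the geometric terms supplied by discrete estimators.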

    Multi-Scale Improves Boundary Detection in Natural Images

    In this work we empirically study the multi-scale boundary detection problem in natural images. We use local boundary cues, including contrast, localization and relative contrast, and train a classifier to integrate them across scales. Our approach successfully combines strengths from both large-scale detection (robust but poorly localized) and small-scale detection (detail-preserving but sensitive to clutter). We carry out quantitative evaluations on a variety of boundary and object datasets with human-marked ground truth. We show that multi-scale boundary detection offers large improvements, ranging from 20% to 50%, over single-scale approaches. This is the first time that multi-scale processing has been demonstrated to improve boundary detection on large datasets of natural images.
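    The recipe in this abstract can be sketched generically (an illustration with my own choice of cue, scales and classifier, not the authors' features): compute a boundary cue at several scales, stack the per-pixel responses into a feature vector, and train a classifier against human-marked ground truth.

        # Sketch: multi-scale boundary cues + a learned combiner.
        # The single cue used here (Gaussian gradient magnitude) stands in for the
        # richer contrast/localization/relative-contrast cues of the paper.
        import numpy as np
        from scipy.ndimage import gaussian_gradient_magnitude
        from sklearn.linear_model import LogisticRegression

        SCALES = (1.0, 2.0, 4.0, 8.0)   # illustrative choice of scales

        def multiscale_features(image):
            """Per-pixel feature vector: one cue response per scale."""
            feats = [gaussian_gradient_magnitude(image.astype(float), sigma=s)
                     for s in SCALES]
            return np.stack(feats, axis=-1).reshape(-1, len(SCALES))

        def train_boundary_classifier(images, boundary_maps):
            """images: 2-D grayscale arrays; boundary_maps: binary arrays of the
            same shapes marking human-labelled boundary pixels."""
            X = np.vstack([multiscale_features(im) for im in images])
            y = np.concatenate([bm.reshape(-1) for bm in boundary_maps])
            return LogisticRegression(max_iter=1000).fit(X, y)

        def boundary_probability(clf, image):
            """Soft boundary map: probability of 'boundary' at each pixel."""
            p = clf.predict_proba(multiscale_features(image))[:, 1]
            return p.reshape(image.shape)

    The resulting soft boundary map can then be thresholded or non-maximum suppressed before evaluation against the ground truth.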